
Make Bazel more verbose in CI builds. #1230

Merged
htuch merged 1 commit into envoyproxy:master from dnoe:make_ci_bazel_noisier
Jul 10, 2017
Conversation

@dnoe (Contributor) commented Jul 10, 2017

Travis has complained about extended periods of no output, especially on
the coverage build. This should make Bazel print something after every
step finishes which will mitigate this issue or help debug if there is a
real build stall.
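The commit itself isn't quoted on this page, so as a hedged sketch only: 2017-era Bazel exposed command-line flags matching this description. The invocation below is an assumption about the mechanism, not the literal content of the merged change.

```shell
# Hypothetical CI invocation (an illustration, not a quote of commit 083be75):
# --curses=no          avoid cursor-control sequences that CI log capture mangles
# --show_task_finish   print a line as each build step finishes, so Travis sees
#                      regular output even during long compile or coverage steps
bazel build --curses=no --show_task_finish //source/...
```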

@dnoe dnoe requested a review from htuch July 10, 2017 14:16
@htuch (Member) commented Jul 10, 2017

Looking at the example coverage log, it doesn't seem much more verbose.

@htuch (Member) commented Jul 10, 2017

I see, during the build stage it at least produces some more useful output, sure.

@htuch htuch merged commit 083be75 into envoyproxy:master Jul 10, 2017
@dnoe (Contributor, Author) commented Jul 10, 2017

Hoping it is just enough more verbose...

jpsim pushed a commit that referenced this pull request Nov 28, 2022
Description: this PR updates Envoy and makes the following changes:
- Linux CI jobs now use the envoy-build-ubuntu container published by [envoy-build-tools](https://github.com/envoyproxy/envoy-build-tools)
- updates upload/download-artifact to v2
- specifies the remote JDK toolchain for the coverage build
- temporarily moves Java and Kotlin unit tests to macOS. A subsequent PR will move as many CI jobs as possible to Linux runners, since container execution saves time and the Linux concurrency limit is higher than the macOS limit
- updates the protobuf dep to 3.14.0
- updates the metrics transport to the v3 API
- updates Bazel build targets used in several places in the codebase
- fixes warnings in C++ tests

Risk Level: low for all jobs run in CI. High for the release and artifact jobs, as they won't execute until a release is done and are most likely broken. Will fix after running.
Testing: CI

Signed-off-by: Jose Nino <jnino@lyft.com>

Co-authored-by: Jose Nino <jnino@lyft.com>
Signed-off-by: JP Simard <jp@jpsim.com>
jpsim pushed a commit that referenced this pull request Nov 29, 2022
mathetake pushed a commit that referenced this pull request Mar 3, 2026
…t model (#1230)

**Description**

This implements `OriginalModel` as the model name extracted from the incoming request body before any virtualization applies. In doing so, we allow metrics such as "gen_ai.server.token.usage" to be aggregated either on the model the router received (`OriginalModel`) or on an override (`RequestModel`).

Flow:
1. Router filter extracts the model from the request body
2. If ModelNameOverride is configured, RequestModel differs from OriginalModel
3. Provider responds with ResponseModel (which may differ from RequestModel)

Example:
1. `OriginalModel`: OpenAI client sends: {"model": "gpt-5"}
2. `RequestModel`: ModelNameOverride replaces it with "gpt-5-nano"
3. `ResponseModel`: OpenAI platform responds: {"model": "gpt-5-nano-2025-08-07"}

OpenTelemetry:

In OpenTelemetry Generative AI Metrics, this is an attribute on metrics
such as "gen_ai.server.token.usage". For example, an OpenAI Chat
Completion request to the "gpt-5" model results in a plain text string
attribute: "gen_ai.original.model" -> "gpt-5"
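To make the three names concrete, here is a minimal Python sketch (an illustration with hypothetical types, not the project's actual implementation) of how the three model names could map onto metric attributes. `gen_ai.request.model` and `gen_ai.response.model` are existing OpenTelemetry GenAI attribute names; `gen_ai.original.model` is the one this change introduces.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelNames:
    """Illustrative container for the three model names in the flow above."""
    original_model: str                    # extracted from the incoming request body
    request_model: str                     # after any ModelNameOverride is applied
    response_model: Optional[str] = None   # reported back by the provider

    def metric_attributes(self) -> dict:
        # Attributes that would be attached to metrics such as
        # gen_ai.server.token.usage.
        attrs = {
            "gen_ai.original.model": self.original_model,
            "gen_ai.request.model": self.request_model,
        }
        if self.response_model is not None:
            attrs["gen_ai.response.model"] = self.response_model
        return attrs


# The example from the description: gpt-5 overridden to gpt-5-nano,
# with the provider reporting a dated variant.
names = ModelNames("gpt-5", "gpt-5-nano", "gpt-5-nano-2025-08-07")
print(names.metric_attributes()["gen_ai.original.model"])  # -> gpt-5
```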

**Related Issues/PRs (if applicable)**

Builds on #1219

---------

Signed-off-by: Adrian Cole <adrian@tetrate.io>